PARQ quantizer support for torchao's weight-only configs #2091


Merged · 13 commits merged into pytorch:main on Apr 28, 2025

Conversation

@lisjin (Contributor) commented Apr 21, 2025

This is the first step in supporting torchao.quantize_ for PARQ trained models. I target only Int4WeightOnlyConfig and IntxWeightOnlyConfig for now since PARQ does not have activation quantization.

Instead of converting the state (e.g., scale, zero point) from PARQ's existing quantizers to torchao format, I decided to create a new quantizer UnifTorchaoQuantizer. This quantizer calls torchao's quantization primitives choose_qparams_affine, quantize_affine, dequantize_affine to ensure parity between the two QAT methods.
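
For reference, here is a minimal sketch of a per-group fake-quant round trip through those primitives. It is illustrative only: the tensor shapes, group size, and 4-bit range below are assumptions, and the exact arguments UnifTorchaoQuantizer passes may differ.

import torch
from torchao.quantization.quant_primitives import (
    MappingType,
    choose_qparams_affine,
    dequantize_affine,
    quantize_affine,
)

# Assumed example values: a 2-D weight, per-group quantization along the last
# dimension (group size 32), and a symmetric 4-bit range stored in int8.
weight = torch.randn(128, 256)
block_size = (1, 32)
quant_min, quant_max = -8, 7

# Choose scale/zero point per group, then quantize and dequantize.
scale, zero_point = choose_qparams_affine(
    weight, MappingType.SYMMETRIC, block_size, torch.int8,
    quant_min=quant_min, quant_max=quant_max,
)
q = quantize_affine(
    weight, block_size, scale, zero_point, torch.int8,
    quant_min=quant_min, quant_max=quant_max,
)
dq = dequantize_affine(
    q, block_size, scale, zero_point, torch.int8,
    quant_min=quant_min, quant_max=quant_max, output_dtype=weight.dtype,
)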

@metascroy It would be great if you could check the correctness of how the quantizer in TestUnifTorchaoQuantizer.test_intx_weight_only is initialized. I'm not sure if I missed any subtleties with int8.

pytorch-bot (bot) commented Apr 21, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2091

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 073e1fa with merge base e3db2b2:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Apr 21, 2025
@lisjin added the topic: improvement label Apr 21, 2025
@lisjin force-pushed the parq branch 4 times, most recently from 48520cb to bea111c on April 22, 2025 15:22
@metascroy (Contributor) commented

@lisjin can you give a little code snippet of how our QAT prepare/convert would work for this API?

I'm having trouble following. Here are some example code snippets from other APIs: https://fb.workplace.com/groups/pytorch.edge2.team/permalink/1186139489308568/

@lisjin requested a review from metascroy on April 23, 2025 21:27
@lisjin force-pushed the parq branch 2 times, most recently from f6362f7 to fb7f521 on April 24, 2025 02:08
@andrewor14 (Contributor) commented

Hi @lisjin, do you mind adding a code snippet to the main README showing what the end-to-end flow would look like? My understanding is that you can just replace the LSBQuantizer there with your new UnifTorchaoQuantizer. Then what happens after training? Do we call quantize_(model, Int4WeightOnlyConfig) as before? It would be good to clarify.

@lisjin (Contributor, Author) commented Apr 25, 2025

@andrewor14 Thanks for the feedback—I removed config from UnifTorchaoQuantizer. In the README, I've also added a side-by-side comparison of PARQ vs. torchao prepare and convert steps. After PARQ training, we call optimizer.torchao_quantize_(model, config). Let me know if there's anything missing.
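
For readers skimming the thread, the end-to-end flow being described looks roughly like the sketch below. The module paths and the QuantOptimizer/ProxHardQuant constructor details are assumptions based on the PARQ prototype; the confirmed piece from this PR is the final optimizer.torchao_quantize_(model, config) call.

import torch
from torchao.quantization import Int4WeightOnlyConfig

# Assumed module paths for the PARQ prototype components.
from torchao.prototype.parq.optim import ProxHardQuant, QuantOptimizer
from torchao.prototype.parq.quant import UnifTorchaoQuantizer

model = torch.nn.Sequential(torch.nn.Linear(256, 256))

# PARQ typically expects parameter groups annotated with a bit width;
# the "quant_bits" key is an assumption here.
param_groups = [{"params": list(model.parameters()), "quant_bits": 4}]
base_optimizer = torch.optim.AdamW(param_groups)

# Prepare: wrap the base optimizer so weights are pulled toward the
# quantization grid during training.
optimizer = QuantOptimizer(
    base_optimizer,
    quantizer=UnifTorchaoQuantizer(),
    prox_map=ProxHardQuant(),
)

# ... run the usual training loop with `optimizer` ...

# Convert: after training, materialize real torchao quantized weights.
config = Int4WeightOnlyConfig(group_size=32)
optimizer.torchao_quantize_(model, config)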

@andrewor14 (Contributor) commented

Looks great, thanks @lisjin! The README is very clear.

One thing I want to discuss is whether we can just use a new PARQConfig instead, so the PARQ flow looks more like the existing torchao QAT flow. This is the current convert flow in the PR:

config = IntxWeightOnlyConfig(weight_dtype=torch.int4, granularity=PerGroup(32))
optimizer.torchao_quantize_(model, config)

What do you think about something like this instead?

inner_config = IntxWeightOnlyConfig(weight_dtype=torch.int4, granularity=PerGroup(32))
parq_config = PARQConfig(optimizer, inner_config)
quantize_(model, parq_config)

Also curious if @metascroy has any thoughts on this

@andrewor14 (Contributor) left a comment

Looks good to me other than the recursion comment. @metascroy any other thoughts?

@metascroy (Contributor) commented

Looks good to me! Thanks @lisjin!

Can we add an end-to-end test_intx_weight_only_e2e for intx (with various x-values), similar to test_int4_weight_only_e2e?

Comment on lines +246 to +247
@common_utils.parametrize("b", [2, 3, 4, 8])
def test_intx_weight_only_e2e(self, b: int = 2, group_size: int = 32):
@lisjin (Contributor, Author) replied

@metascroy Thanks for looking it over! I've added this end-to-end test, along with mapping_type=MappingType.SYMMETRIC and target_dtype=torch.int8 defaults for UnifTorchaoQuantizer.
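
As a quick illustration of those defaults, constructing the quantizer with them explicitly should match the no-argument form; the module path below is an assumption.

import torch
from torchao.quantization.quant_primitives import MappingType
from torchao.prototype.parq.quant import UnifTorchaoQuantizer  # assumed path

# Passing the stated defaults explicitly.
quantizer = UnifTorchaoQuantizer(
    mapping_type=MappingType.SYMMETRIC,
    target_dtype=torch.int8,
)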

@lisjin requested review from andrewor14 and metascroy on April 28, 2025 14:14

@facebook-github-bot (Contributor) commented

@lisjin has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@lisjin merged commit 8334340 into pytorch:main on Apr 28, 2025 · 19 of 20 checks passed
@lisjin deleted the parq branch on April 28, 2025 17:36